RDG for DPF Zero Trust (DPF-ZT) with OVN VPC DPU service

DPU Service Installation

Before deploying the objects under the docs/public/user-guides/vpc_only/ directory, a few adjustments are required.

Modify the variables to fit your environment, then source the file:

Warning

Replace the values of the variables in the following file with values that fit your setup. In particular, pay attention to VTEP_CIDR, VTEP_GATEWAY, EXTERNAL_CIDR, and EXTERNAL_GATEWAY.

manifests/00-env-vars/envvars.env


## IP Address for one of the control plane nodes in the host k8s cluster.
## This should never include a scheme or a port.
## e.g. 10.10.10.10
## This IP is used by ovn-controller to access ovn-central, which is exposed as a NodePort service.
export TARGETCLUSTER_OVN_CENTRAL_IP=10.0.110.10

## IP address range for VTEPs used by VPC OVN Service on the high speed fabric.
## This is a CIDR in the form e.g. 20.20.0.0/16
export VTEP_CIDR=20.20.0.0/16

## The Gateway address of the VTEP subnet
## This is an IP in the form e.g. 20.20.0.1
export VTEP_GATEWAY=20.20.0.1

## IP address range for the external network used by VPC OVN Service on the high speed fabric.
## This is a CIDR in the form e.g. 30.30.0.0/16
export EXTERNAL_CIDR=10.0.123.254/22

## The Gateway address of the external subnet
## This is an IP in the form e.g. 30.30.0.1
export EXTERNAL_GATEWAY=10.0.123.254

## The repository URL for the NVIDIA Helm chart registry.
## Usually this is the NVIDIA Helm NGC registry. For development purposes, this can be set to a different repository.
export HELM_REGISTRY_REPO_URL=oci://harbor.mellanox.com/cloud-orchestration-dev/dpf

## The DPF TAG is the version of the DPF components which will be deployed in this guide.
export TAG=v25.7.0-beta.3

## URL to the BFB used in the `bfb.yaml` and linked by the DPUSet.
export BLUEFIELD_BITSTREAM="https://content.mellanox.com/BlueField/BFBs/Ubuntu22.04/bf-bundle-3.0.0-135_25.04_ubuntu-22.04_prod.bfb"

Export environment variables for the installation:

Jump Node Console


$ source manifests/00-env-vars/envvars.env
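
Optionally, as a quick sanity check, print a few of the exported variables to confirm they match your environment before applying any manifests:

Jump Node Console

$ echo "OVN central: $TARGETCLUSTER_OVN_CENTRAL_IP"
$ echo "VTEP:        $VTEP_CIDR via $VTEP_GATEWAY"
$ echo "External:    $EXTERNAL_CIDR via $EXTERNAL_GATEWAY"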

Create a DPUFlavor using the following YAML:

Note

The settings below configure a DPU in Zero Trust mode, which means DPU management will be blocked from the bare-metal host.

To deploy in DPU mode, comment out the line containing dpuMode:

# dpuMode: zero-trust

manifests/01-vpc-ovn-dpudeployment/dpuflavor.yaml


---
apiVersion: provisioning.dpu.nvidia.com/v1alpha1
kind: DPUFlavor
metadata:
  name: vpc-flavor
  namespace: dpf-operator-system
spec:
  dpuMode: zero-trust
  bfcfgParameters:
    - UPDATE_ATF_UEFI=yes
    - UPDATE_DPU_OS=yes
    - WITH_NIC_FW_UPDATE=yes
  configFiles:
    - operation: override
      path: /etc/mellanox/mlnx-bf.conf
      permissions: "0644"
      raw: |
        ALLOW_SHARED_RQ="no"
        IPSEC_FULL_OFFLOAD="no"
        ENABLE_ESWITCH_MULTIPORT="yes"
    - operation: override
      path: /etc/mellanox/mlnx-ovs.conf
      permissions: "0644"
      raw: |
        CREATE_OVS_BRIDGES="no"
        OVS_DOCA="yes"
    - operation: override
      path: /etc/mellanox/mlnx-sf.conf
      permissions: "0644"
      raw: ""
  grub:
    kernelParameters:
      - console=hvc0
      - console=ttyAMA0
      - earlycon=pl011,0x13010000
      - fixrttc
      - net.ifnames=0
      - biosdevname=0
      - iommu.passthrough=1
      - cgroup_no_v1=net_prio,net_cls
      - hugepagesz=2048kB
      - hugepages=3072
  nvconfig:
    - device: '*'
      parameters:
        - PF_BAR2_ENABLE=0
        - PER_PF_NUM_SF=1
        - PF_TOTAL_SF=20
        - PF_SF_BAR_SIZE=10
        - NUM_PF_MSIX_VALID=0
        - PF_NUM_PF_MSIX_VALID=1
        - PF_NUM_PF_MSIX=228
        - INTERNAL_CPU_MODEL=1
        - INTERNAL_CPU_OFFLOAD_ENGINE=0
        - SRIOV_EN=1
        - NUM_OF_VFS=46
        - LAG_RESOURCE_ALLOCATION=1
  ovs:
    rawConfigScript: |
      _ovs-vsctl() {
        ovs-vsctl --no-wait --timeout 15 "$@"
      }

      _ovs-vsctl set Open_vSwitch . other_config:doca-init=true
      _ovs-vsctl set Open_vSwitch . other_config:dpdk-max-memzones=50000
      _ovs-vsctl set Open_vSwitch . other_config:hw-offload=true
      _ovs-vsctl set Open_vSwitch . other_config:pmd-quiet-idle=true
      _ovs-vsctl set Open_vSwitch . other_config:max-idle=20000
      _ovs-vsctl set Open_vSwitch . other_config:max-revalidator=5000
      _ovs-vsctl --if-exists del-br ovsbr1
      _ovs-vsctl --if-exists del-br ovsbr2
      _ovs-vsctl --may-exist add-br br-sfc
      _ovs-vsctl set bridge br-sfc datapath_type=netdev
      _ovs-vsctl set bridge br-sfc fail_mode=secure
      _ovs-vsctl --may-exist add-port br-sfc p0
      _ovs-vsctl set Interface p0 type=dpdk
      _ovs-vsctl set Interface p0 mtu_request=9216
      _ovs-vsctl set Port p0 external_ids:dpf-type=physical

Apply the DPUFlavor YAML using the following command:

Jump Node Console


$ cat manifests/01-vpc-ovn-dpudeployment/dpuflavor.yaml | envsubst | kubectl apply -f -
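
Optionally, confirm that the DPUFlavor object was created. This assumes the usual lower-case plural resource name (dpuflavors) for the DPUFlavor CRD:

Jump Node Console

$ kubectl get dpuflavors -n dpf-operator-system vpc-flavor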

Change the dpudeployment.yaml file so that it references the DPUFlavor created above (vpc-flavor):

manifests/01-vpc-ovn-dpudeployment/dpudeployment.yaml


---
apiVersion: svc.dpu.nvidia.com/v1alpha1
kind: DPUDeployment
metadata:
  name: vpc-ovn
  namespace: dpf-operator-system
spec:
  dpus:
    bfb: bf-bundle
    flavor: vpc-flavor
    nodeEffect:
      noEffect: true
    dpuSets:
      - nameSuffix: "dpuset1"
        nodeSelector:
          matchLabels:
            feature.node.kubernetes.io/dpu-enabled: "true"
  services:
    ovn-central:
      serviceTemplate: ovn-central
      serviceConfiguration: ovn-central
    ovn-controller:
      serviceTemplate: ovn-controller
      serviceConfiguration: ovn-controller
    vpc-ovn-controller:
      serviceTemplate: vpc-ovn-controller
      serviceConfiguration: vpc-ovn-controller
    vpc-ovn-node:
      serviceTemplate: vpc-ovn-node
      serviceConfiguration: vpc-ovn-node
  serviceChains:
    switches:
      - ports:
          - serviceInterface:
              matchLabels:
                ovn.vpc.dpu.nvidia.com/interface: p0
          - serviceInterface:
              matchLabels:
                ovn.vpc.dpu.nvidia.com/interface: ovn-vtep-patch
          - serviceInterface:
              matchLabels:
                ovn.vpc.dpu.nvidia.com/interface: ovn-ext-patch

The OVN VPC service consists of the following components:

  1. ovn-central: Deployed in the target cluster (runs northd, sb_db, nb_db)

  2. ovn-controller: Deployed in the DPU cluster

  3. vpc-ovn-controller: VPC controller in the target cluster

  4. vpc-ovn-node: VPC node agent in the DPU cluster
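
Once the DPUDeployment and the manifests below have been applied, each of these components is rendered as a DPUService object in the target cluster. A quick way to list them, in addition to the condition-based checks later in this guide, is:

Jump Node Console

$ kubectl get dpuservices -n dpf-operator-system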

Jump Node Console


---
apiVersion: svc.dpu.nvidia.com/v1alpha1
kind: DPUServiceConfiguration
metadata:
  name: ovn-central
  namespace: dpf-operator-system
spec:
  deploymentServiceName: ovn-central
  upgradePolicy:
    applyNodeEffect: false
  serviceConfiguration:
    deployInCluster: true
    helmChart:
      values:
        exposedPorts:
          ports:
            ovnnb: true
            ovnsb: true
        management:
          ovnCentral:
            enabled: true
            affinity:
              nodeAffinity:
                requiredDuringSchedulingIgnoredDuringExecution:
                  nodeSelectorTerms:
                    - matchExpressions:
                        - key: "node-role.kubernetes.io/master"
                          operator: Exists
                    - matchExpressions:
                        - key: "node-role.kubernetes.io/control-plane"
                          operator: Exists
            tolerations:
              - key: node-role.kubernetes.io/master
                operator: Exists
                effect: NoSchedule
              - key: node-role.kubernetes.io/control-plane
                operator: Exists
                effect: NoSchedule

Jump Node Console


---
apiVersion: svc.dpu.nvidia.com/v1alpha1
kind: DPUServiceConfiguration
metadata:
  name: ovn-controller
  namespace: dpf-operator-system
spec:
  deploymentServiceName: ovn-controller
  upgradePolicy:
    applyNodeEffect: false
  serviceConfiguration:
    helmChart:
      values:
        dpu:
          ovnController:
            enabled: true

Jump Node Console


---
apiVersion: svc.dpu.nvidia.com/v1alpha1
kind: DPUServiceConfiguration
metadata:
  name: vpc-ovn-controller
  namespace: dpf-operator-system
spec:
  deploymentServiceName: vpc-ovn-controller
  upgradePolicy:
    applyNodeEffect: false
  serviceConfiguration:
    deployInCluster: true
    helmChart:
      values:
        host:
          vpcOVNController:
            enabled: true
            affinity:
              nodeAffinity:
                requiredDuringSchedulingIgnoredDuringExecution:
                  nodeSelectorTerms:
                    - matchExpressions:
                        - key: "node-role.kubernetes.io/master"
                          operator: Exists
                    - matchExpressions:
                        - key: "node-role.kubernetes.io/control-plane"
                          operator: Exists

Jump Node Console


---
apiVersion: svc.dpu.nvidia.com/v1alpha1
kind: DPUServiceConfiguration
metadata:
  name: vpc-ovn-node
  namespace: dpf-operator-system
spec:
  deploymentServiceName: vpc-ovn-node
  upgradePolicy:
    applyNodeEffect: false
  serviceConfiguration:
    helmChart:
      values:
        dpu:
          vpcOVNNode:
            enabled: true
            initContainers:
              vpcOVNDpuProvisioner:
                env:
                  ovnSbEndpoint: "tcp:$TARGETCLUSTER_OVN_CENTRAL_IP:30642"
                ipRequests:
                  - name: "vtep"
                    poolName: "vpc-ippool-vtep"
                  - name: "gateway"
                    poolName: "vpc-ippool-gateway"

Jump Node Console


---
apiVersion: svc.dpu.nvidia.com/v1alpha1
kind: DPUServiceTemplate
metadata:
  name: ovn-central
  namespace: dpf-operator-system
spec:
  deploymentServiceName: ovn-central
  helmChart:
    source:
      repoURL: $HELM_REGISTRY_REPO_URL
      version: $TAG
      chart: ovn-chart

Jump Node Console


---
apiVersion: svc.dpu.nvidia.com/v1alpha1
kind: DPUServiceTemplate
metadata:
  name: ovn-controller
  namespace: dpf-operator-system
spec:
  deploymentServiceName: ovn-controller
  helmChart:
    source:
      repoURL: $HELM_REGISTRY_REPO_URL
      version: $TAG
      chart: ovn-chart

Jump Node Console


---
apiVersion: svc.dpu.nvidia.com/v1alpha1
kind: DPUServiceTemplate
metadata:
  name: vpc-ovn-controller
  namespace: dpf-operator-system
spec:
  deploymentServiceName: vpc-ovn-controller
  helmChart:
    source:
      repoURL: $HELM_REGISTRY_REPO_URL
      version: $TAG
      chart: dpf-vpc-ovn

Jump Node Console


---
apiVersion: svc.dpu.nvidia.com/v1alpha1
kind: DPUServiceTemplate
metadata:
  name: vpc-ovn-node
  namespace: dpf-operator-system
spec:
  deploymentServiceName: vpc-ovn-node
  helmChart:
    source:
      repoURL: $HELM_REGISTRY_REPO_URL
      version: $TAG
      chart: dpf-vpc-ovn

Jump Node Console


---
apiVersion: svc.dpu.nvidia.com/v1alpha1
kind: DPUServiceIPAM
metadata:
  name: vpc-ippool-vtep
  namespace: dpf-operator-system
spec:
  metadata:
    labels:
      ovn.vpc.dpu.nvidia.com/pool: vpc-ippool-vtep
  ipv4Subnet:
    subnet: $VTEP_CIDR
    gateway: $VTEP_GATEWAY
    perNodeIPCount: 4
---
apiVersion: svc.dpu.nvidia.com/v1alpha1
kind: DPUServiceIPAM
metadata:
  name: vpc-ippool-gateway
  namespace: dpf-operator-system
spec:
  metadata:
    labels:
      ovn.vpc.dpu.nvidia.com/pool: vpc-ippool-gateway
  ipv4Subnet:
    subnet: $EXTERNAL_CIDR
    gateway: $EXTERNAL_GATEWAY
    perNodeIPCount: 4

Jump Node Console


---
apiVersion: "svc.dpu.nvidia.com/v1alpha1"
kind: DPUServiceInterface
metadata:
  name: p0
  namespace: dpf-operator-system
spec:
  template:
    spec:
      template:
        metadata:
          labels:
            ovn.vpc.dpu.nvidia.com/interface: "p0"
        spec:
          interfaceType: physical
          physical:
            interfaceName: p0
---
apiVersion: "svc.dpu.nvidia.com/v1alpha1"
kind: DPUServiceInterface
metadata:
  name: ovn-vtep-patch
  namespace: dpf-operator-system
spec:
  template:
    spec:
      template:
        metadata:
          labels:
            ovn.vpc.dpu.nvidia.com/interface: "ovn-vtep-patch"
        spec:
          interfaceType: ovn
          ovn:
            externalBridge: br-ovn-vtep
---
apiVersion: "svc.dpu.nvidia.com/v1alpha1"
kind: DPUServiceInterface
metadata:
  name: ovn-ext-patch
  namespace: dpf-operator-system
spec:
  template:
    spec:
      template:
        metadata:
          labels:
            ovn.vpc.dpu.nvidia.com/interface: "ovn-ext-patch"
        spec:
          interfaceType: ovn
          ovn:
            externalBridge: br-ovn-ext

Apply all of the YAML files mentioned above using the following command:

Jump Node Console


$ cat manifests/02-vpc-ovn-dpudeployment/* | envsubst | kubectl apply -f -

Verify the DPUService installation by ensuring that the following conditions are met:

Note

These verification commands may need to be run multiple times to ensure the conditions are met.

Jump Node Console


$ kubectl wait --for=condition=ApplicationsReconciled --namespace dpf-operator-system dpuservices --all

$ kubectl wait --for=condition=DPUIPAMObjectReconciled --namespace dpf-operator-system dpuserviceipam --all

$ kubectl wait --for=condition=ServiceInterfaceSetReconciled --namespace dpf-operator-system dpuserviceinterface --all

$ kubectl wait --for=condition=ServiceChainSetReconciled --namespace dpf-operator-system dpuservicechain --all
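
Since these conditions can take a while to be met, you may prefer to let kubectl block for a bounded period instead of re-running the commands manually, for example (the timeout value shown is illustrative):

Jump Node Console

$ kubectl wait --for=condition=ApplicationsReconciled --timeout=600s --namespace dpf-operator-system dpuservices --all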

To follow the progress of DPU provisioning, run the following command to check its current phase:

Jump Node Console


$ watch -n10 "kubectl describe dpu -n dpf-operator-system | grep 'Node Name\|Type\|Last\|Phase'"

Every 10.0s: kubectl describe dpu -n dpf-operator-system | grep 'Node Name\|Type\|Last\|Phase'          setup5-jump: Wed May 21 10:45:44 2025

Dpu Node Name:              dpu-node-mt2402xz0f7x
  Type:                     InternalIP
  Type:                     Hostname
  Last Transition Time:     2025-05-21T07:23:09Z
  Type:                     Initialized
  Last Transition Time:     2025-05-21T07:23:09Z
  Type:                     BFBReady
  Last Transition Time:     2025-05-21T07:23:11Z
  Type:                     NodeEffectReady
  Last Transition Time:     2025-05-21T07:23:15Z
  Type:                     InterfaceInitialized
  Last Transition Time:     2025-05-21T07:23:17Z
  Type:                     FWConfigured
  Last Transition Time:     2025-05-21T07:23:18Z
  Type:                     BFBPrepared
  Last Transition Time:     2025-05-21T07:27:25Z
  Type:                     OSInstalled
  Last Transition Time:     2025-05-21T07:44:54Z
  Type:                     Rebooted

Dpu Node Name:              dpu-node-mt2402xz0f80
  Type:                     InternalIP
  Type:                     Hostname
  Last Transition Time:     2025-05-21T07:23:08Z
  Type:                     Initialized
  Last Transition Time:     2025-05-21T07:23:09Z
  Type:                     BFBReady
  Last Transition Time:     2025-05-21T07:23:09Z
  Type:                     NodeEffectReady
  Last Transition Time:     2025-05-21T07:23:12Z
  Type:                     InterfaceInitialized
  Last Transition Time:     2025-05-21T07:23:14Z
  Type:                     FWConfigured
  Last Transition Time:     2025-05-21T07:23:15Z
  Type:                     BFBPrepared
  Last Transition Time:     2025-05-21T07:27:23Z
  Type:                     OSInstalled
  Last Transition Time:     2025-05-21T07:45:01Z
  Type:                     Rebooted

...
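
For a more compact view of the same progress, you can also watch the DPU objects directly; the PHASE column moves through the provisioning stages until it reaches Ready:

Jump Node Console

$ watch -n10 "kubectl get dpu -n dpf-operator-system"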

Wait for the Rebooted stage, then manually power cycle the bare-metal host.

After the DPUs are back up, run the following command for each DPU worker:

Jump Node Console


$ kubectl annotate dpunodes -n dpf-operator-system dpu-node-mt2402xz0f7x provisioning.dpu.nvidia.com/dpunode-external-reboot-required-

$ kubectl annotate dpunodes -n dpf-operator-system dpu-node-mt2402xz0f80 provisioning.dpu.nvidia.com/dpunode-external-reboot-required-

$ kubectl annotate dpunodes -n dpf-operator-system dpu-node-mt2402xz0f8g provisioning.dpu.nvidia.com/dpunode-external-reboot-required-

$ kubectl annotate dpunodes -n dpf-operator-system dpu-node-mt2402xz0f9n provisioning.dpu.nvidia.com/dpunode-external-reboot-required-
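
To confirm that the external-reboot-required annotation was removed from a given DPU node, you can print its remaining annotations (shown here for one node as an example):

Jump Node Console

$ kubectl get dpunodes -n dpf-operator-system dpu-node-mt2402xz0f7x -o jsonpath='{.metadata.annotations}{"\n"}'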

At this point, the DPU workers should be added to the cluster. As they are being added, the DPUs are provisioned.

Jump Node Console


$ watch -n10 "kubectl describe dpu -n dpf-operator-system | grep 'Node Name\|Type\|Last\|Phase'"

Every 10.0s: kubectl describe dpu -n dpf-operator-system | grep 'Node Name\|Type\|Last\|Phase'          setup5-jump: Wed May 21 10:45:44 2025

Dpu Node Name:              dpuworker1
  Type:                     InternalIP
  Type:                     Hostname
  Last Transition Time:     2025-05-21T07:23:09Z
  Type:                     Initialized
  Last Transition Time:     2025-05-21T07:23:09Z
  Type:                     BFBReady
  Last Transition Time:     2025-05-21T07:23:11Z
  Type:                     NodeEffectReady
  Last Transition Time:     2025-05-21T07:23:15Z
  Type:                     InterfaceInitialized
  Last Transition Time:     2025-05-21T07:23:17Z
  Type:                     FWConfigured
  Last Transition Time:     2025-05-21T07:23:18Z
  Type:                     BFBPrepared
  Last Transition Time:     2025-05-21T07:27:25Z
  Type:                     OSInstalled
  Last Transition Time:     2025-05-21T07:44:54Z
  Type:                     Rebooted
  Last Transition Time:     2025-05-21T07:44:54Z
  Type:                     DPUClusterReady
  Last Transition Time:     2025-05-21T07:44:55Z
  Type:                     Ready
Phase:                      Ready

Dpu Node Name:              dpuworker2
  Type:                     InternalIP
  Type:                     Hostname
  Last Transition Time:     2025-05-21T07:23:08Z
  Type:                     Initialized
  Last Transition Time:     2025-05-21T07:23:09Z
  Type:                     BFBReady
  Last Transition Time:     2025-05-21T07:23:09Z
  Type:                     NodeEffectReady
  Last Transition Time:     2025-05-21T07:23:12Z
  Type:                     InterfaceInitialized
  Last Transition Time:     2025-05-21T07:23:14Z
  Type:                     FWConfigured
  Last Transition Time:     2025-05-21T07:23:15Z
  Type:                     BFBPrepared
  Last Transition Time:     2025-05-21T07:27:23Z
  Type:                     OSInstalled
  Last Transition Time:     2025-05-21T07:45:01Z
  Type:                     Rebooted
  Last Transition Time:     2025-05-21T07:45:01Z
  Type:                     DPUClusterReady
  Last Transition Time:     2025-05-21T07:45:02Z
  Type:                     Ready
Phase:                      Ready

...

Finally, validate that all the different DPU-related objects are now in the Ready state:

Jump Node Console


$ kubectl -n dpf-operator-system exec deploy/dpf-operator-controller-manager -- /dpfctl describe dpudeployments
NAME                                      NAMESPACE            STATUS       REASON    SINCE  MESSAGE
DPFOperatorConfig/dpfoperatorconfig       dpf-operator-system  Ready: True  Success   114m
├─DPUServiceChains
│ └─DPUServiceChain/vpc-ovn-2t97b         dpf-operator-system  Ready: True  Success   43h
├─DPUServiceIPAMs
│ └─2 DPUServiceIPAMs...                  dpf-operator-system  Ready: True  Success   42h    See vpc-ippool-gateway, vpc-ippool-vtep
├─DPUServiceInterfaces
│ └─5 DPUServiceInterfaces...             dpf-operator-system  Ready: True  Success   8d     See blue-vf2, ovn-ext-patch, ovn-vtep-patch, p0, red-vf2
└─DPUSets
  └─DPUSet/vpc-ovn-dpuset1                dpf-operator-system
    ├─BFB/bf-bundle                       dpf-operator-system  Ready: True  Ready     9d     File: bf-bundle-3.0.0-135_25.04_ubuntu-22.04_prod.bfb, DOCA: 3.0.0
    ├─DPU/mt2402xz0f7x                    dpf-operator-system  Ready: True  DPUReady  43h
    ├─DPU/mt2402xz0f80                    dpf-operator-system  Ready: True  DPUReady  43h
    ├─DPU/mt2402xz0f8g                    dpf-operator-system  Ready: True  DPUReady  42h
    └─DPU/mt2402xz0f9n                    dpf-operator-system  Ready: True  DPUReady  42h

$ kubectl get secrets -n dpu-cplane-tenant1 dpu-cplane-tenant1-admin-kubeconfig -o json | jq -r '.data["admin.conf"]' | base64 --decode > /home/depuser/dpu-cluster.config

$ KUBECONFIG=/home/depuser/dpu-cluster.config k get node -A
NAME           STATUS   ROLES    AGE   VERSION
mt2402xz0f7x   Ready    <none>   43m   v1.30.12
mt2402xz0f80   Ready    <none>   43m   v1.30.12
mt2402xz0f8g   Ready    <none>   43m   v1.30.12
mt2402xz0f9n   Ready    <none>   43m   v1.30.12

$ kubectl get dpu -A
NAMESPACE             NAME           READY   PHASE   AGE
dpf-operator-system   mt2402xz0f7x   True    Ready   43m
dpf-operator-system   mt2402xz0f80   True    Ready   43m
dpf-operator-system   mt2402xz0f8g   True    Ready   43m
dpf-operator-system   mt2402xz0f9n   True    Ready   43m

$ kubectl wait --for=condition=ready --namespace dpf-operator-system dpu --all
dpu.provisioning.dpu.nvidia.com/mt2402xz0f7x condition met
dpu.provisioning.dpu.nvidia.com/mt2402xz0f80 condition met
dpu.provisioning.dpu.nvidia.com/mt2402xz0f8g condition met
dpu.provisioning.dpu.nvidia.com/mt2402xz0f9n condition met
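
The k and ki shorthands used in the output above and in the section below are assumed to be kubectl aliases, with ki pointing at the DPU cluster kubeconfig extracted above. If they are not already defined in your environment, an illustrative way to define them is:

Jump Node Console

$ alias k="kubectl"
$ alias ki="kubectl --kubeconfig /home/depuser/dpu-cluster.config"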

Deploy test topology

Add blue and red labels to the relevant DPU nodes. Set the node names according to your environment.

Jump Node Console


$ ki label node mt2402xz0f7x mt2402xz0f80 vpc.dpu.nvidia.com/tenant=red

$ ki label node mt2402xz0f8g mt2402xz0f9n vpc.dpu.nvidia.com/tenant=blue
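
To verify that the tenant labels were applied as expected, list the DPU cluster nodes with the label shown as a column:

Jump Node Console

$ ki get node -L vpc.dpu.nvidia.com/tenant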

In this deployment, we create a dual-VPC environment (blue and red).

Change the vpc-topology-dual-vpc.yaml file to the following configuration:

vpc-topology-dual-vpc.yaml


---
apiVersion: v1
kind: Namespace
metadata:
  name: blue
---
apiVersion: v1
kind: Namespace
metadata:
  name: red
---
apiVersion: vpc.dpu.nvidia.com/v1alpha1
kind: DPUVPC
metadata:
  name: blue-vpc
  namespace: blue
spec:
  tenant: blue
  isolationClassName: ovn.vpc.dpu.nvidia.com
  interNetworkAccess: true
  nodeSelector:
    matchLabels:
      vpc.dpu.nvidia.com/tenant: blue
---
apiVersion: vpc.dpu.nvidia.com/v1alpha1
kind: DPUVirtualNetwork
metadata:
  name: blue-net
  namespace: blue
spec:
  vpcName: blue-vpc
  type: Bridged
  externallyRouted: true
  masquerade: true
  bridgedNetwork:
    ipam:
      ipv4:
        dhcp: true
        subnet: 192.178.0.0/16
---
apiVersion: svc.dpu.nvidia.com/v1alpha1
kind: DPUServiceInterface
metadata:
  name: blue-vf2
  namespace: blue
spec:
  template:
    spec:
      nodeSelector:
        matchLabels:
          vpc.dpu.nvidia.com/tenant: blue
      template:
        spec:
          interfaceType: vf
          vf:
            pfID: 0
            vfID: 2
            virtualNetwork: blue-net
            parentInterfaceRef: ""
---
apiVersion: vpc.dpu.nvidia.com/v1alpha1
kind: DPUVPC
metadata:
  name: red-vpc
  namespace: red
spec:
  tenant: red
  isolationClassName: ovn.vpc.dpu.nvidia.com
  interNetworkAccess: true
  nodeSelector:
    matchLabels:
      vpc.dpu.nvidia.com/tenant: red
---
apiVersion: vpc.dpu.nvidia.com/v1alpha1
kind: DPUVirtualNetwork
metadata:
  name: red-net
  namespace: red
spec:
  vpcName: red-vpc
  type: Bridged
  externallyRouted: true
  masquerade: true
  bridgedNetwork:
    ipam:
      ipv4:
        dhcp: true
        subnet: 192.178.0.0/16
---
apiVersion: svc.dpu.nvidia.com/v1alpha1
kind: DPUServiceInterface
metadata:
  name: red-vf2
  namespace: red
spec:
  template:
    spec:
      nodeSelector:
        matchLabels:
          vpc.dpu.nvidia.com/tenant: red
      template:
        spec:
          interfaceType: vf
          vf:
            pfID: 0
            vfID: 2
            virtualNetwork: red-net
            parentInterfaceRef: ""

Apply the YAML file using the following command:

Jump Node Console


$ kubectl apply -f vpc-topology-dual-vpc.yaml

Verify:

Jump Node Console


$ ki get serviceinterface -A
NAMESPACE             NAME                          IFTYPE     IFNAME   AGE
blue                  blue-vf2-mt2402xz0f8g         vf                  42h
blue                  blue-vf2-mt2402xz0f9n         vf                  42h
dpf-operator-system   ovn-ext-patch-mt2402xz0f7x    ovn                 43h
dpf-operator-system   ovn-ext-patch-mt2402xz0f80    ovn                 43h
dpf-operator-system   ovn-ext-patch-mt2402xz0f8g    ovn                 43h
dpf-operator-system   ovn-ext-patch-mt2402xz0f9n    ovn                 43h
dpf-operator-system   ovn-vtep-patch-mt2402xz0f7x   ovn                 43h
dpf-operator-system   ovn-vtep-patch-mt2402xz0f80   ovn                 43h
dpf-operator-system   ovn-vtep-patch-mt2402xz0f8g   ovn                 43h
dpf-operator-system   ovn-vtep-patch-mt2402xz0f9n   ovn                 43h
dpf-operator-system   p0-mt2402xz0f7x               physical            43h
dpf-operator-system   p0-mt2402xz0f80               physical            43h
dpf-operator-system   p0-mt2402xz0f8g               physical            43h
dpf-operator-system   p0-mt2402xz0f9n               physical            43h
red                   red-vf2-mt2402xz0f7x          vf                  42h
red                   red-vf2-mt2402xz0f80          vf                  42h

$ kubectl get dpuvpcs.vpc.dpu.nvidia.com -A
NAMESPACE   NAME       READY   PHASE     AGE
blue        blue-vpc   True    Success   50m
red         red-vpc    True    Success   50m

$ ki get serviceinterface -A -o yaml -n red
...
    status:
      conditions:
      - lastTransitionTime: "2025-07-14T13:43:31Z"
        message: ""
        observedGeneration: 1
        reason: Success
        status: "True"
        type: Ready
      - lastTransitionTime: "2025-07-14T13:43:31Z"
        message: ""
        observedGeneration: 1
        reason: Success
        status: "True"
        type: ServiceInterfaceReconciled
      observedGeneration: 1
...

$ ki get serviceinterface -A -o yaml -n blue
...
    status:
      conditions:
      - lastTransitionTime: "2025-07-14T13:43:31Z"
        message: ""
        observedGeneration: 1
        reason: Success
        status: "True"
        type: Ready
      - lastTransitionTime: "2025-07-14T13:43:31Z"
        message: ""
        observedGeneration: 1
        reason: Success
        status: "True"
        type: ServiceInterfaceReconciled
      observedGeneration: 1
...
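
In addition to the checks above, the virtual networks themselves can be listed; this assumes the DPUVirtualNetwork CRD follows the usual lower-case plural resource naming:

Jump Node Console

$ kubectl get dpuvirtualnetworks.vpc.dpu.nvidia.com -A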
